
I'm a regular listener of the 80k hrs podcast. Our paper came up on the episode with Tristan Harris, and Rob encouraged responses on this forum, so here I go.

Update: apologies, but this post will only make sense if you listen to this episode of the 80k hrs podcast.  


Conflict can be an effective tactic for good

I have a mini Nassim Taleb inside me that I let out for special occasions 😠. I'm sometimes rude to Tristan, Kevin Roose and others. It's not because Tristan is worried about the possible negative impacts of social media (I'm not against that at all); it's because he has been one of the most influential people in building a white-hot moral panic, and he frequently bends the truth for the cause.

One part he gets right is that provoking a high-reach person into conflict with you gives your message more attention. Even if they don't reply, you are more likely to be boosted by their detractors. This underdog advantage isn't "hate", and the small advantage it gives is massively outweighed by their institutional status, finances and social proof. To play by gentleman's rules is to their advantage: it curtails the tools at my disposal for making bullshit as costly as possible.

I acknowledge there are costs to this (e.g. polluting the information commons with avoidable conflict), and good people can disagree about whether the tradeoff is worth it. But I believe it is.

Were Tristan and the tech media successful in improving YouTube's recommendation algorithm?

I'll give this win to Tristan and Roose. I believe YouTube did respond to this pressure when, in early 2019, they reduced recommendations of conspiracy and borderline content. This was better overall, but not great.

But YouTube was probably never what they described: a recommendation rabbit hole to radicalization. If it was, there was never strong evidence to support it.

The YouTube recommendation algorithm has always boosted recent, highly watched videos, and it has been through three main phases:

Clickbait Phase: Favoured high click-through rates on video thumbnails. This meant thumbnails were very "tabloidy" and edgy, and frequently misrepresented the content of the video. But no one ever showed that this influenced users down an extremist rabbit hole; they just asserted it, or made very weak attempts at evidence.

View-Neutral Phase: Favoured videos that people watched more of and rated highly after watching. This was a big improvement in recommendation quality. YouTube hadn't yet started putting its thumb on the scales, so recommendations largely matched the proportion of views a video received.

Authoritative Phase: Favours traditional media, especially highly partisan cable news, with very few recommendations of conspiracy and borderline content. This was announced in early 2019 and deployed in April 2019.
 

Tristan regularly represents today's algorithm as a radicalization rabbit hole. His defence that critics are unfair because the algorithm changed after he made the critique is wrong. He made no effort to clarify this in The Social Dilemma (released Jan 2020), or in his appearances about it, and hasn't updated his talking points. For example, speaking on the Joe Rogan podcast in October 2020 he said: "no matter what I start with, what is it going to recommend next. So if you start with a WW2 video, YouTube recommends a bunch of holocaust denial videos".

What's the problem with scapegoating the algorithm and encouraging draconian platform moderation?

Tristan's hyperbole sets the stage for drastic action. Draconian solutions to misdiagnosed problems will probably have unintended consequences that are worse than doing nothing. I wrote about this in regard to the QAnon crackdown:

  • Demand for partisan conspiracy content is strong, and the internet will supply it one way or another. Moderation is driving a big movement towards "free speech" platforms, which (due to selection effects) are intense bubbles of far-right and conspiracy content.
  • Content moderation is building grievances that will not be easily placated. YouTube's moderation removes more than 1000x as many right-leaning videos as left-leaning ones. On the current trajectory, political content may end up largely separated into tribal platforms.
  • Scapegoating the scary algorithm, or adopting a puritanical approach to moderation, will work against more practical and effective actions.

The anonymous user limitation of YouTube studies

It's technically quite difficult to analyse the YouTube algorithm in a way that includes personalization. Our study was the most rigorous and comprehensive look at the recommendation algorithm's political influence at the time, despite the limitation of collecting non-personalized recommendations. To take the results at face value, you need to assume that personalization will "average out" to about the same influence once aggregated. I think it's an open question, but it's reasonable to assume the results will be in the same ballpark.

My experience is that critics who point to this as a flaw, or as a reason to ignore the results, are inconsistent in their skepticism. The metrics that Tristan uses in this podcast (e.g. "recommended flat Earth videos hundreds of millions of times") are based on Guillaume Chaslot's data, which is also based on anonymous recommendations. I am also skeptical about those results:
- These figures are much higher than what we see, and Chaslot is not transparent about how they have been calculated.
- Chaslot's data is based on the API, which gives distorted recommendations compared to our method of scraping the website (much closer to real-world use).

The quality of research in this space is improving quickly. The most recent study uses real-world user traffic to estimate how people follow recommendations from videos. It's a very promising approach once they fix some issues.

We have been collecting personalized recommendations since November. We are analysing the results and will present them on transparency.tube and in a paper in the coming months. I hope Tristan and other prominent people will start to update the way they talk about YouTube based on the best and latest research. If they continue to misdiagnose problems, the fervor for solutions they whip up will be misdirected.

What are the most effective ways we can address problems from social media?

I have a narrow focus on the mechanics of YouTube's platform, but I'll give my intuitional grab bag of the ideas I find most promising for reducing the bad things about social media:

  • Building or popularising apps/extensions that people can use to steer their own future behaviour towards their higher-order desires. The types of apps that Rob suggested are great and some are new to me (e.g. Inbox When Ready, Todobook, Freedom, News Feed Eradicator).
  • Platform nudges, like adjusting recommendations and adding information banners on misinformation, together with research into the effectiveness of these interventions.
  • Grassroots efforts to cool down partisanship, like BraverAngels, with research into measuring their impact.
  • And for the easy one 😅: addressing corruption and decay in institutions (e.g. academia, media, politics), lack of state capacity, low growth, negative polarization and income inequality. Just fix those things and social media will be much less toxic.
     

Rob did some really good background research and gently pushed back in the right places. It's the best interview with Tristan I have listened to.

Comments

Thanks for writing this. I haven't (yet) listened to the podcast and that's perhaps why your post here felt like I was joining in the middle of a discussion. Could I suggest that at the top of your post you very briefly say who you are and what your main claim is, just so these are clear? I take it the claim is that YouTube's recommendations engine does not (contrary to recent popular opinion) push people towards polarisation and conspiracy theories. If that is your main claim, I'd like you to say why YouTube doesn't have that feature and why people who claim it does are mistaken.

(FWIW, I'm an old forum hand and I've learnt you can't expect people to read papers you link to. If you want people to discuss them, you need to make your main claims in the post here itself.)

In general I agree, but the forum guidelines do state "Polish: We'd rather see an idea presented imperfectly than not see it at all.", and this is a post explicitly billed as a "response" that was invited by Rob. So if this is all the time Mark wants to spend on it, I feel it is perfectly fine to have a post that is only for people who have listened to the podcast/are aware of the debate.

Oh, what I said wasn't a criticism, so much as a suggestion to how more people might get up to speed on what's under debate!

Thanks for posting this! I thought it was interesting, and I would support more people writing up responses to 80K podcasts.

Minor: you have a typo in your link to transparency.tube

Thank you 🙏 Fixed.

Thanks for writing this and for your research in this area. Based on my own read of the literature, it seems broadly correct to me, and I wish that more people had an accurate impression of polarization on social media vs mainstream news and their relative effects.

While I think your position is much more correct than the conventional one, I did want to point to an interesting paper by Ro'ee Levy, which has some very good descriptive and causal statistics on polarization on Facebook: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3653388. It suggests (among many other interesting findings) that Facebook probably is somewhat more slanted than mainstream news and that this may drive a small but meaningful increase in affective polarization. That being said, it's unlikely to be the primary driver of US trends.

Hi Mark, thanks for writing this post. I only had a cursory reading of your linked paper and the 80k episode transcript, but my impression is that Tristan's main worry (as I understand it)  and your analysis are not incompatible:  

Tristan and parts of broader society fear that, through the recommendation algorithm, users discover radicalizing content. According to your paper, the algorithm does not favour and might even actively be biased against e.g. conspiracy content.

Again, I am not terribly familiar with the whole discussion, but so far I have not yet seen the point made clearly enough that both these claims can be true: the algorithm could show less "radicalizing" content than an unbiased algorithm would, but even these fewer recommendations could be enough to radicalize viewers compared to a baseline where the algorithm recommended no such content. Thus, YouTube could be accused of not "doing enough".

Your own paper cites this paper arguing that there is a clear pattern of viewership migration from moderate "Intellectual Dark Web" channels to alt-right content, based on an analysis of user comments. Despite the limitation of using only user comments that your paper mentions, I think that commenting users are still a valid subset of all users and their movement towards more radical content needs to be explained, and that the recommendation algorithm is certainly a plausible explanation. Since you have doubts about this hypothesis, may I ask if you think there are likelier ways these users have radicalized?

A way to test the role of the recommendation algorithm could be to redo the analysis of the user movement data for comments left after the change of the recommendation algorithm. If the movement is basically the same despite fewer recommendations of radical content, that is evidence that the recommendations never played a role, as you argue in this post. If, however, the movement towards alt-right or radical content is lessened, it is reasonable to conclude that recommendations have played a role in the past, and by extension could still play a (smaller) role now.

I agree you can still criticize YouTube, even if they are recommending conspiracy content less than "view-neutral". My main disagreement is with the facts: Tristan is representing YouTube as a radicalization pipeline caused by the influence of recommendations. Let's say that YouTube is more radicalizing than a no-recommendation system all things considered, because users were sure to click on radical content whenever it appeared. In that case you would describe radicalization as demand from users, rather than a radicalization rabbit hole caused by a manipulative algorithm. I'm open to this possibility, and I wouldn't give this much pushback if that were what was being described.

The "Auditing Radicalization Pathways on YouTube" paper is clever in the way it uses comments to get at real-world movement. But that paper doesn't tell us much, given that a) they didn't analyse movement from right to left (one-way movement tells you about churn, but nothing directional) and b) they didn't share their data.

The best I have seen is this study, which uses real-world web usage from a representative sample of users to get at the real behaviour of users who are clicking on recommendations. They are currently redoing the analysis with better classifications, so we will see what happens.

That still doesn't fully answer your question, though. To get at the real influence of recommendations, you would need to run actual experiments, something only YouTube can really do right now, or a third party, if one were somehow allowed to provide a YouTube recsys.

My suspicions about the radicalization that leads to real-world violence mainly concern things outside the influence of algorithms: disillusionment, experience of malevolence, and grooming by ideologically violent religious/political groups.

Thank you for this post, Mark! I appreciate that you included the graph, though I'm not sure how to interpret it. Do you mind explaining what the "recommendation impression advantage" is? (I'm sure you explain this in great detail in your paper, so feel free to ignore me or say "go read the paper" :D).

The main question that pops out for me is "advantage relative to what?" I imagine a lot of people would say "even if YouTube's algorithm is less likely to recommend [conspiracy videos/propaganda/fake news] than [traditional media/videos about cats],  then it's still a problem! Any amount of recommending [bad stuff that is  harmful/dangerous/inaccurate] should not be tolerated!"

What would you say to those people?

Recommendation advantage is the ratio of impressions sent vs received. https://github.com/markledwich2/recfluence#calculations-and-considerations
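To make that concrete, here is a minimal sketch of how a sent-vs-received impression ratio per channel could be computed from a table of recommendation impressions. This is not the recfluence code: the field names, channels and counts are hypothetical, and the exact definition (including any weighting) is in the README linked above.

```python
from collections import defaultdict

# Each record is one recommendation impression: shown on a video from
# `from_channel`, pointing at a video from `to_channel`. Hypothetical data.
impressions = [
    {"from_channel": "A", "to_channel": "B"},
    {"from_channel": "A", "to_channel": "C"},
    {"from_channel": "B", "to_channel": "A"},
    {"from_channel": "C", "to_channel": "A"},
    {"from_channel": "C", "to_channel": "A"},
]

sent = defaultdict(int)      # impressions a channel's videos send to others
received = defaultdict(int)  # impressions pointing at a channel's videos

for imp in impressions:
    sent[imp["from_channel"]] += 1
    received[imp["to_channel"]] += 1

for channel in sorted(set(sent) | set(received)):
    # Advantage > 1 means the channel receives more impressions than it sends.
    advantage = received[channel] / max(sent[channel], 1)
    print(channel, round(advantage, 2))
```

Under this toy definition, channel A (received 3, sent 2) gets an advantage of 1.5, while channel C (received 1, sent 2) gets 0.5.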

Yes, I agree with that. There is definitely a lot of room for criticism and different points of view about what should be removed or sans-recommended. My main effort here is to make sure people know what is happening.
